Image captioning is a straightforward task that can take advantage of large-scale web-crawled data, which provides rich knowledge about the visual world for a captioning model. However, since web-crawled data contains image-text pairs that are aligned at different levels, the inherent noise (e.g., misaligned pairs) makes it difficult to learn a precise captioning model. While a filtering strategy can effectively remove noisy data, it also reduces the amount of learnable knowledge and sometimes causes a new problem of data deficiency. To get the best of both worlds, we propose a noise-aware learning framework that learns rich knowledge from the entire web-crawled dataset while being less affected by the noise. This is achieved with the proposed quality-controllable model, which is trained using the alignment levels of the image-text pairs as an additional control signal. The alignment-conditioned training allows the model to generate high-quality, well-aligned captions by simply setting the control signal to the desired alignment level at inference time. Through in-depth analysis, we show that our controllable captioning model is effective in handling noise. In addition, on two tasks, zero-shot captioning and text-to-image retrieval using generated captions (i.e., self-retrieval), we demonstrate that our model can produce high-quality captions in terms of descriptiveness and distinctiveness. Code is available at \url{https://github.com/kakaobrain/noc}.
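To make the alignment-conditioned training concrete, here is a minimal sketch of how a control signal could be derived and attached to each caption during training; the bucketing thresholds, token format, and dual-encoder scorer are illustrative assumptions, not the released implementation:

```python
import numpy as np

def alignment_bucket(score, thresholds=(0.2, 0.3, 0.4)):
    """Discretize a cosine-similarity alignment score into a control-signal
    bucket index (hypothetical thresholds; the paper's exact scheme may differ)."""
    return int(np.digitize(score, thresholds))  # 0 = worst .. 3 = best aligned

def make_training_caption(caption, image_emb, text_emb):
    """Prepend an alignment-level control token to the caption.
    image_emb / text_emb: L2-normalized embeddings from any dual encoder
    (e.g., a CLIP-like model) -- an assumption for this sketch."""
    score = float(np.dot(image_emb, text_emb))   # cosine similarity
    bucket = alignment_bucket(score)
    return f"<align_{bucket}> {caption}"         # control token + caption

# At inference time, the decoder would be conditioned on the best-alignment
# token (e.g., "<align_3>") to request well-aligned, high-quality captions.
rng = np.random.default_rng(0)
img, txt = rng.normal(size=64), rng.normal(size=64)
img, txt = img / np.linalg.norm(img), txt / np.linalg.norm(txt)
print(make_training_caption("a dog running on the beach", img, txt))
```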
We tackle open-world semantic segmentation, which aims to learn to segment arbitrary visual concepts in images using only image-text pairs without dense annotations. Existing open-world segmentation methods have shown impressive advances by employing contrastive learning (CL) to learn diverse visual concepts and adapting the learned image-level understanding to the segmentation task. However, these CL-based methods suffer from a train-test discrepancy: training considers only image-text level alignment, while the segmentation task requires region-text level alignment at test time. In this paper, we propose a novel Text-grounded Contrastive Learning (TCL) framework that directly aligns a text and a region described by the text to address this train-test discrepancy. Our method generates a segmentation mask associated with a given text, extracts a grounded image embedding from the masked region, and aligns it with the text embedding via TCL. The framework addresses the discrepancy by letting the model learn region-text level alignment instead of image-text level alignment, and encourages the model to directly improve the quality of the generated segmentation masks. In addition, for a rigorous and fair comparison, we present a unified evaluation protocol over 8 widely used semantic segmentation datasets. TCL achieves state-of-the-art zero-shot segmentation performance with large margins on all datasets. Code is available at https://github.com/kakaobrain/tcl.
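The region-text alignment step can be sketched as masked average pooling followed by a symmetric contrastive loss; this is a minimal PyTorch illustration of the idea under assumed tensor shapes, not the authors' exact TCL objective:

```python
import torch
import torch.nn.functional as F

def grounded_image_embedding(feat_map, mask):
    """Masked average pooling: pool image features over the text-grounded
    region only. feat_map: (B, C, H, W), mask: (B, 1, H, W) in [0, 1]."""
    weighted = (feat_map * mask).sum(dim=(2, 3))
    area = mask.sum(dim=(2, 3)).clamp(min=1e-6)
    return weighted / area                                # (B, C)

def tcl_loss(feat_map, mask, text_emb, temperature=0.07):
    """A minimal InfoNCE-style region-text contrastive loss (a sketch of the
    idea, not the paper's exact objective): each grounded region embedding
    should match its own text and repel other texts in the batch."""
    img_emb = F.normalize(grounded_image_embedding(feat_map, mask), dim=-1)
    txt_emb = F.normalize(text_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature          # (B, B) similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    # Symmetric cross-entropy over region->text and text->region matching.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

B, C, H, W = 4, 512, 14, 14
loss = tcl_loss(torch.randn(B, C, H, W),
                torch.rand(B, 1, H, W),
                torch.randn(B, C))
print(loss.item())
```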
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practices, as well as the bottlenecks, faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Neural Architecture Search (NAS) for automatically finding the optimal network architecture has shown some success, with competitive performance in various computer vision tasks. However, NAS in general requires a tremendous amount of computation, so reducing the computational cost has emerged as an important issue. Most of the attempts so far have been based on manual approaches, and the architectures developed from such efforts often trade off network optimality against search cost. Additionally, recent NAS methods for image restoration generally do not consider dynamic operations that may transform the dimensions of feature maps, because of the dimensionality mismatch this causes in tensor calculations. This can greatly limit NAS in its search for the optimal network structure. To address these issues, we re-frame the optimal search problem by focusing on the component block level. Previous work has shown that an effective denoising block can be connected in series to further improve network performance. By focusing on the block level, the search space for reinforcement learning becomes significantly smaller and the evaluation process can be conducted more rapidly. In addition, we integrate an innovative dimension-matching module for dealing with the spatial and channel-wise mismatches that may occur during the optimal design search, which allows much flexibility in the optimal network search within a cell block. With these modules, we then employ reinforcement learning to search for an optimal image denoising network at the module level. The computational efficiency of our proposed Denoising Prior Neural Architecture Search (DPNAS) is demonstrated by completing an optimal architecture search for an image restoration task in just one day with a single GPU.
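The block-level reinforcement-learning search can be illustrated with a toy REINFORCE controller over a small operation vocabulary; the operation set, the reward (a fabricated proxy for validation quality), and the tabular controller below are assumptions for demonstration only, not the DPNAS implementation:

```python
import math
import random

OPS = ["conv3x3", "conv5x5", "dilated3x3", "sep_conv3x3"]
N_BLOCKS = 3

def mock_reward(block_ops):
    """Fabricated stand-in for the validation score (e.g., PSNR) of a
    denoising network assembled from the sampled blocks."""
    return sum(len(op) for op in block_ops) / 40.0 + random.gauss(0, 0.01)

def softmax(prefs):
    z = [math.exp(p) for p in prefs]
    s = sum(z)
    return [x / s for x in z]

prefs = [[0.0] * len(OPS) for _ in range(N_BLOCKS)]   # per-slot preferences
baseline, lr = 0.0, 0.5
random.seed(0)

for step in range(300):
    probs = [softmax(row) for row in prefs]
    choices = [random.choices(range(len(OPS)), weights=p)[0] for p in probs]
    reward = mock_reward([OPS[c] for c in choices])
    baseline = 0.9 * baseline + 0.1 * reward          # variance-reduction baseline
    advantage = reward - baseline
    for slot, chosen in enumerate(choices):           # REINFORCE: grad of log pi
        for j in range(len(OPS)):
            indicator = 1.0 if j == chosen else 0.0
            prefs[slot][j] += lr * advantage * (indicator - probs[slot][j])

best = [OPS[max(range(len(OPS)), key=row.__getitem__)] for row in prefs]
print("best op per block:", best)
```

Because the search space is a short sequence of per-block choices rather than a full network graph, each controller step is cheap to sample and evaluate, which is the efficiency argument behind searching at the block level.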
Autonomous navigation in crowded spaces poses a challenge for mobile robots due to the highly dynamic, partially observable environment. Occlusions are highly prevalent in such settings due to a limited sensor field of view and obstructing human agents. Previous work has shown that observed interactive behaviors of human agents can be used to estimate potential obstacles despite occlusions. We propose integrating such social inference techniques into the planning pipeline. We use a variational autoencoder with a specially designed loss function to learn representations that are meaningful for occlusion inference. This work adopts a deep reinforcement learning approach to incorporate the learned representation for occlusion-aware planning. In simulation, our occlusion-aware policy achieves comparable collision avoidance performance to fully observable navigation by estimating agents in occluded spaces. We demonstrate successful policy transfer from simulation to the real-world Turtlebot 2i. To the best of our knowledge, this work is the first to use social occlusion inference for crowd navigation.
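A generic VAE over occupancy-style observations conveys the representation-learning step; this sketch uses the standard ELBO objective, whereas the paper uses a specially designed loss that is not reproduced here, and the observation encoding is an assumption:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OccupancyVAE(nn.Module):
    """A minimal VAE over flattened occupancy-grid observations -- a generic
    sketch; the paper's loss and architecture differ."""
    def __init__(self, obs_dim=256, latent_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                 nn.Linear(128, obs_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar

def elbo_loss(recon, x, mu, logvar):
    # Reconstruction term + KL divergence to the standard normal prior.
    rec = F.binary_cross_entropy_with_logits(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

model = OccupancyVAE()
x = torch.rand(8, 256)                     # batch of toy occupancy grids
recon, mu, logvar = model(x)
print(elbo_loss(recon, x, mu, logvar).item())
# The learned latent (mu) would then be appended to the RL policy's
# observation for occlusion-aware planning.
```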
When humans interact with robots, they are inevitably influenced. Consider an autonomous car driving near a human: the autonomous car's speed and steering will affect how the human drives. Prior works have developed frameworks that enable robots to influence humans toward desired behaviors. However, while these approaches are effective in the short term (i.e., the first few human-robot interactions), here we explore long-term influence (i.e., repeated interactions between the same human and robot). Our central insight is that humans are dynamic: people adapt to robots, and behaviors that are influential now may fail once the human learns to anticipate the robot's actions. With this insight, we experimentally demonstrate that a prevalent game-theoretic formalism for generating influential robot behavior becomes less effective over repeated interactions. Next, we propose three modifications to Stackelberg games that make the robot's policy both influential and unpredictable. We finally test these modifications in simulation and in a user study: our results suggest that robots that purposely make their behavior harder to anticipate better maintain their influence over long-term interactions. See videos here: https://youtu.be/ydo83cgjz2q
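The core dynamic, a follower who learns to anticipate a deterministic leader versus one who stays partly unpredictable, can be caricatured in a few lines; everything below (the payoffs, the human's prediction rule, the epsilon-randomization) is a fabricated toy, not the paper's game formulation:

```python
import random

ACTIONS = [0, 1]

def leader_payoff(a_r, a_h):
    # The robot (leader) wants the human to mirror action 1.
    return 1.0 if (a_r == 1 and a_h == 1) else 0.0

def human_response(history, a_r):
    """Adaptive human: once the robot's behavior looks predictable, the
    human exploits the pattern instead of following (a toy model)."""
    if len(history) < 5:
        return a_r                              # initially follows the robot
    p1 = sum(history) / len(history)            # empirical P(robot plays 1)
    if max(p1, 1 - p1) > 0.9:                   # robot is easy to anticipate
        return 0                                # human deviates, influence lost
    return a_r                                  # uncertain -> keeps following

def run(epsilon, rounds=50):
    history, total = [], 0.0
    for _ in range(rounds):
        # epsilon > 0: robot occasionally randomizes to stay unpredictable.
        a_r = random.choice(ACTIONS) if random.random() < epsilon else 1
        total += leader_payoff(a_r, human_response(history, a_r))
        history.append(a_r)
    return total / rounds

random.seed(0)
print("deterministic robot:", run(epsilon=0.0))   # influence collapses
print("unpredictable robot:", run(epsilon=0.4))   # influence is maintained
```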
For the economical deployment of robot manipulators, the programming and execution of robot motions must be swift. To this end, we propose a novel, constraint-based method to intuitively specify sequential manipulation tasks and to compute time-optimal robot motions for such a task specification. Our approach follows the ideas of constraint-based task specification, aiming at a minimal and object-centric task description that is largely independent of the underlying robot kinematics. We convert this task description into a nonlinear optimization problem. By solving this problem, we obtain (locally) time-optimal robot motions, not just for a single motion but for an entire manipulation sequence. We demonstrate the capabilities of our approach in a series of experiments involving five distinct robot models, including a highly redundant mobile manipulator.
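Casting a sequential specification as a nonlinear program can be sketched with a toy 1D example: choose segment durations that minimize total motion time under a velocity limit. The waypoints, limit, and solver are assumptions; the paper's formulation additionally accounts for full robot kinematics:

```python
import numpy as np
from scipy.optimize import minimize

waypoints = np.array([0.0, 0.4, 1.0, 1.3])     # task-level via points (m)
dists = np.abs(np.diff(waypoints))             # per-segment travel distance
v_max = 0.5                                    # velocity limit (m/s)

def total_time(durations):
    return durations.sum()                     # objective: time optimality

constraints = [
    # Velocity constraint per segment: dist / duration <= v_max.
    {"type": "ineq", "fun": lambda t, i=i: v_max - dists[i] / t[i]}
    for i in range(len(dists))
]
bounds = [(1e-3, None)] * len(dists)           # durations must be positive
t0 = np.ones(len(dists))                       # initial guess: 1 s per segment

res = minimize(total_time, t0, bounds=bounds, constraints=constraints)
print("optimal segment durations (s):", np.round(res.x, 3))
print("total motion time (s):", round(res.fun, 3))
```

Solving over the whole sequence at once, rather than segment by segment, is what lets such a formulation trade time between segments of a manipulation sequence.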
Robots need multiple interaction modes to robustly collaborate with humans in complex industrial tasks. We develop a COexistence-and-COoperation (CoCo) human-robot collaboration system. The coexistence mode enables the robot to work independently on different sub-tasks alongside the human in a shared space. The cooperation mode enables the robot to follow human guidance and recover from failures. A human intent tracking algorithm takes human and robot motion measurements as input and provides the switch between the interaction modes. We demonstrate the effectiveness of the CoCo system in a use case analogous to a real-world multi-step assembly task.
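The intent-driven mode switch can be caricatured as a thresholded decision; the probability fields and threshold below are illustrative stand-ins, not the paper's intent-tracking algorithm:

```python
from dataclasses import dataclass

@dataclass
class IntentEstimate:
    p_needs_help: float   # estimated probability the human requests guidance
    p_failure: float      # estimated probability the current step has failed

def select_mode(intent: IntentEstimate, threshold: float = 0.6) -> str:
    """Switch between coexistence (independent parallel work) and
    cooperation (follow human guidance / recover from failure)."""
    if intent.p_needs_help > threshold or intent.p_failure > threshold:
        return "cooperation"
    return "coexistence"

print(select_mode(IntentEstimate(p_needs_help=0.1, p_failure=0.05)))  # coexistence
print(select_mode(IntentEstimate(p_needs_help=0.8, p_failure=0.10)))  # cooperation
```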
Online federated learning (OFL) is a promising framework to collaboratively learn a sequence of non-linear functions (or models) from distributed streaming data incoming to multiple clients, while keeping their local data private. In this framework, we first construct a vanilla method (named OFedAvg) by incorporating online gradient descent (OGD) into the de facto aggregation method (named FedAvg). Despite its optimal asymptotic performance, OFedAvg suffers from a heavy communication overhead and long learning delay. To tackle these shortcomings, we propose a communication-efficient OFL algorithm (named OFedQIT) by means of stochastic quantization and intermittent transmission. Our main theoretical contribution is to prove that OFedQIT over $T$ time slots can achieve an optimal sublinear regret bound of $\mathcal{O}(\sqrt{T})$ for any real data (including non-IID data), while significantly reducing the communication overhead. Furthermore, this optimality is still guaranteed even when only a small fraction of clients (with faster processing times and high-quality communication channels) participate in the network at a time. Our analysis reveals that OFedQIT successfully addresses the drawbacks of OFedAvg while maintaining superior learning accuracy. Experiments with real datasets demonstrate the effectiveness of our algorithm on various online classification and regression tasks.
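One round of such a scheme can be sketched as follows: intermittently sampled clients take a local online-gradient step on their newest sample and transmit a stochastically quantized gradient, which the server averages. The QSGD-style quantizer, loss, and sampling model are generic assumptions, not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_quantize(v, levels=8):
    """Unbiased stochastic quantization to a small number of levels (a
    generic QSGD-style scheme; the paper's quantizer may differ).
    Randomized rounding preserves E[q] = v, which regret analyses rely on."""
    norm = np.linalg.norm(v)
    if norm == 0:
        return v
    scaled = np.abs(v) / norm * levels
    lower = np.floor(scaled)
    q = lower + (rng.random(v.shape) < (scaled - lower))  # randomized rounding
    return np.sign(v) * q * norm / levels

d, n_clients, lr, participation = 16, 10, 0.1, 0.3
w = np.zeros(d)
for t in range(200):
    active = rng.random(n_clients) < participation      # intermittent transmission
    quantized_grads = []
    for k in np.flatnonzero(active):
        x, y = rng.normal(size=d), rng.normal()         # client's streaming sample
        grad = (w @ x - y) * x                          # OGD step, squared loss
        quantized_grads.append(stochastic_quantize(grad))
    if quantized_grads:
        w = w - lr * np.mean(quantized_grads, axis=0)   # server-side aggregation
print("final weight norm:", round(float(np.linalg.norm(w)), 3))
```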
Intonations play an important role in delivering the intention of a speaker. However, current end-to-end TTS systems often fail to model proper intonations. To alleviate this problem, we propose a novel, intuitive method for synthesizing speech in different intonations using predefined intonation templates. Prior to TTS model training, speech data are grouped into intonation templates in an unsupervised manner. Two proposed modules are added to the end-to-end TTS framework: an intonation predictor and an intonation encoder. The intonation predictor recommends a suitable intonation template for the given text. The intonation encoder, attached to the text encoder output, synthesizes speech abiding by the requested intonation template. The main contributions of our paper are: (a) an easy-to-use intonation control system covering a wide range of users; (b) better performance in wrapping speech in a requested intonation, with improved objective and subjective evaluations; and (c) the incorporation of a pre-trained language model for intonation modelling. Audio samples are available at https://srtts.github.io/IntoTTS.
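The unsupervised template-grouping step can be sketched by clustering normalized pitch contours; the toy F0 features and cluster count below are assumptions for illustration, not the paper's exact recipe:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def fake_f0_contour(kind, length=50):
    """Toy pitch contours (Hz) standing in for real extracted F0 tracks."""
    t = np.linspace(0, 1, length)
    base = {"rise": 120 + 40 * t, "fall": 160 - 40 * t, "flat": 140 + 0 * t}[kind]
    return base + rng.normal(0, 3, length)

contours = np.stack([fake_f0_contour(k)
                     for k in ["rise", "fall", "flat"] * 30])
# Normalize per utterance so templates capture shape, not speaker pitch range.
normed = (contours - contours.mean(axis=1, keepdims=True)) / \
         contours.std(axis=1, keepdims=True)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(normed)
templates = kmeans.cluster_centers_          # one template per intonation class
print("template ids for first 6 utterances:", kmeans.labels_[:6])
# At TTS training time, each utterance's template id would serve as the label
# consumed by the intonation encoder; the intonation predictor learns to map
# text to a template id.
```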